
    Texture Extraction Techniques for the Classification of Vegetation Species in Hyperspectral Imagery: Bag of Words Approach Based on Superpixels

    Texture information allows characterizing the regions of interest in a scene. It refers to the spatial organization of the fundamental microstructures in natural images. Texture extraction has been a challenging problem in the field of image processing for decades. In this paper, different techniques based on the classic Bag of Words (BoW) approach are proposed for solving the texture extraction problem in the case of hyperspectral images of the Earth surface. In all cases the texture extraction is performed inside regions of the scene called superpixels, and the algorithms exploit the information available in all the bands of the image. The main contribution is the use of superpixel segmentation to obtain irregular patches from the images prior to texture extraction. Texture descriptors are extracted from each superpixel. Three schemes for texture extraction are proposed: codebook-based, descriptor-based, and spectral-enhanced descriptor-based. The first one is based on a codebook generator algorithm, while the other two include additional stages of keypoint detection and description. The evaluation is performed by analyzing the results of a supervised classification using Support Vector Machines (SVM), Random Forest (RF), and Extreme Learning Machines (ELM) after the texture extraction. The results show that the extraction of textures inside superpixels increases the accuracy of the obtained classification map. The proposed techniques are analyzed over different multi- and hyperspectral datasets focusing on vegetation species identification. The best classification results for each image in terms of Overall Accuracy (OA) range from 81.07% to 93.77% for images taken at a river area in Galicia (Spain), and from 79.63% to 95.79% for a vast rural region in China, with reasonable computation times. This work was supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia and developed in partnership with the Babcock Company to promote the use of unmanned technologies in civil services. We also acknowledge the support of the Ministerio de Ciencia e Innovación, Government of Spain (grant number PID2019-104834GB-I00), and Consellería de Educación, Universidade e Formación Profesional (ED431C 2018/19, and accreditation 2019-2022 ED431G-2019/04). All are co-funded by the European Regional Development Fund (ERDF).
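
    As a rough illustration of the codebook-based scheme (the simplest of the three), the sketch below segments a cube into SLIC superpixels, builds a k-means codebook from the per-pixel spectra, and describes each superpixel by its visual-word histogram before training a classifier. All function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a codebook-based BoW texture descriptor per superpixel.
# Assumes cube is an (H, W, B) reflectance array; requires scikit-image >= 0.19.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def bow_superpixel_features(cube, n_segments=500, n_words=64):
    h, w, b = cube.shape
    # Irregular patches: SLIC over the full cube, bands treated as channels.
    segments = slic(cube, n_segments=n_segments, compactness=0.1,
                    channel_axis=-1, start_label=0)
    pixels = cube.reshape(-1, b).astype(np.float64)
    # Codebook: cluster all spectra into n_words visual words.
    codebook = MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(pixels)
    words = codebook.predict(pixels).reshape(h, w)
    # One normalized word histogram (the texture descriptor) per superpixel.
    feats = np.zeros((segments.max() + 1, n_words))
    for s in range(segments.max() + 1):
        hist = np.bincount(words[segments == s], minlength=n_words)
        feats[s] = hist / max(hist.sum(), 1)
    return segments, feats

# Usage sketch: label superpixels (e.g. by majority vote over ground truth)
# and train an SVM, e.g. SVC(kernel="rbf").fit(feats[train_ids], train_labels).
```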

    GPU Accelerated FFT-Based Registration of Hyperspectral Scenes

    Registration is a fundamental preliminary task in many applications of hyperspectrometry. Most of the algorithms developed are designed to work with RGB images and do not consider execution time. This paper presents a phase correlation algorithm on GPU to register two remote sensing hyperspectral images. The proposed algorithm is based on principal component analysis, the multilayer fractional Fourier transform, a combination of log-polar maps, and peak processing. It is fully developed in CUDA for NVIDIA GPUs. Different techniques, such as the efficient use of the memory hierarchy, the use of CUDA libraries, and the maximization of occupancy, have been applied to reach the best performance on GPU. The algorithm is robust, achieving speedups on GPU of up to 240.6×. This work was supported in part by the Consellería de Cultura, Educación e Ordenación Universitaria under Grant GRC2014/008 and Grant ED431G/08, and in part by the Ministry of Education, Culture and Sport, Government of Spain, under Grant TIN2013-41129-P and Grant TIN2016-76373-P. Both are co-funded by the European Regional Development Fund. The work of A. Ordóñez was supported by the Ministry of Education, Culture and Sport, Government of Spain, under FPU Grant FPU16/03537.
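
    For readers unfamiliar with phase correlation, the translation-recovery core of an FFT-based registration can be sketched in a few lines of NumPy (single band or first principal component, CPU only); the paper's CUDA method additionally recovers rotation and scale through log-polar maps and applies dedicated peak processing.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the (row, col) translation between two equally sized
    single-band images from the normalized cross-power spectrum."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = F_ref * np.conj(F_mov)
    cross /= np.abs(cross) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(cross).real           # correlation surface, peak = shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint correspond to negative (wrapped-around) shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```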

    Comparing area–based and feature–based methods for co–registration of multispectral bands on GPU

    This is a post-print of the article "Comparing Area-Based and Feature-Based Methods for Co-Registration of Multispectral Bands on GPU", published in the Proceedings of IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. Registration is required as a previous step for processing multispectral images. The different bands captured by each sensor for each image, as well as the different images corresponding to the same area, need to be aligned. In this paper, a 2-level registration scheme comparing the results obtained by the hyperspectral Fourier-Mellin (HYFM) and hyperspectral KAZE (HSI-KAZE) registration methods is proposed. It is designed for efficient implementation on a multi-GPU system in which different scenes are registered in parallel on different GPUs.

    Fourier–Mellin registration of two hyperspectral images

    Hyperspectral images contain a great amount of information which can be used to register such images more robustly. In this article, we present a phase correlation method to register two hyperspectral images that takes into account their multiband structure. The proposed method is based on principal component analysis, the multilayer fractional Fourier transform, a combination of log-polar maps, and peak processing. The combination of maps is aimed at highlighting some peaks in the log-polar map using information from different bands. The method is robust and has been successfully tested for any rotation angle with commonly used hyperspectral scenes in remote sensing for scales of up to 7.5×, and with pairs of hyperspectral images taken on different dates by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor for scales of up to 6.0×. This work was supported in part by the Consellería de Cultura, Educación e Ordenación Universitaria [grant numbers GRC2014/008 and ED431G/08] and the Ministry of Education, Culture and Sport, Government of Spain [grant numbers TIN2013-41129-P and TIN2016-76373-P]; both are co-funded by the European Regional Development Fund (ERDF).
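
    The single-band core of such a Fourier-Mellin scheme (rotation and scale read off a log-polar map of the spectrum magnitude) can be sketched with scikit-image as below; the multiband combination of log-polar maps and the peak-processing stage described in the article are not reproduced, and the parameters are illustrative assumptions.

```python
import numpy as np
from skimage.transform import warp_polar
from skimage.registration import phase_cross_correlation

def rotation_and_scale(ref, mov):
    """Estimate rotation (degrees) and isotropic scale between two single-band
    images from the log-polar maps of their FFT magnitudes."""
    radius = min(ref.shape) // 2
    mag_ref = np.abs(np.fft.fftshift(np.fft.fft2(ref)))
    mag_mov = np.abs(np.fft.fftshift(np.fft.fft2(mov)))
    lp_ref = warp_polar(mag_ref, radius=radius, scaling="log",
                        output_shape=(360, radius))
    lp_mov = warp_polar(mag_mov, radius=radius, scaling="log",
                        output_shape=(360, radius))
    shift, _, _ = phase_cross_correlation(lp_ref, lp_mov)
    rotation = shift[0]                                 # rows span 360 degrees
    scale = np.exp(shift[1] * np.log(radius) / radius)  # columns are log-radius
    # Sign/inversion of both values depends on which image is the reference.
    return rotation, scale
```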

    Extended Anisotropic Diffusion Profiles in GPU for Hyperspectral Imagery

    Morphological profiles are a common approach for extracting spatial information from remote sensing hyperspectral images by extracting structural features. Other profiles can be built based on different approaches, such as differential morphological profiles or attribute profiles. Another technique for characterizing the spatial information of the images at different scales is based on computing profiles relying on edge-preserving filters such as anisotropic diffusion filters. Their main advantage is the preservation of the distinctive morphological features of the images, at the cost of an iterative calculation. In this article, the high computational cost associated with the construction of anisotropic diffusion profiles (ADPs) is greatly reduced. In particular, we propose a low-cost computational approach for computing ADPs on NVIDIA GPUs, as well as a detailed characterization of the method, comparing it in terms of accuracy and structural similarity to other existing alternatives. This work was supported in part by the Consellería de Educación, Universidade e Formación Profesional under Grants GRC2014/008, ED431C 2018/19, and ED431G/08, in part by the Ministerio de Economía y Empresa, Government of Spain, under Grant TIN2016-76373-P, and in part by the European Regional Development Fund.
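
    To make the notion of a diffusion profile concrete, the sketch below runs a basic Perona-Malik anisotropic diffusion on a single band and stacks snapshots at increasing iteration counts; the diffusivity, stopping times and GPU mapping of the ADPs in the article may differ, so the values here are placeholders.

```python
import numpy as np

def perona_malik_step(u, kappa=0.1, gamma=0.2):
    """One explicit Perona-Malik iteration with the exponential diffusivity."""
    dn = np.roll(u, -1, axis=0) - u   # finite differences towards the
    ds = np.roll(u, 1, axis=0) - u    # four nearest neighbours
    de = np.roll(u, -1, axis=1) - u
    dw = np.roll(u, 1, axis=1) - u
    # Edge-stopping function g(|grad u|) = exp(-(|grad u| / kappa)^2)
    cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
    ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
    return u + gamma * (cn * dn + cs * ds + ce * de + cw * dw)

def diffusion_profile(band, snapshots=(5, 10, 20, 40)):
    """Stack progressively smoothed versions of one band into a profile."""
    u, it, profile = band.astype(np.float64), 0, []
    for target in snapshots:
        while it < target:
            u = perona_malik_step(u)
            it += 1
        profile.append(u.copy())
    return np.stack(profile, axis=-1)
```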

    SURF-Based Registration for Hyperspectral Images

    The alignment of images, also known as registration, is a relevant task in the processing of hyperspectral images. Among the feature-based registration methods, Speeded Up Robust Features (SURF) has been proposed as a computationally efficient approach. In this paper, HSI-SURF, a method to register hyperspectral remote sensing images based on SURF that takes advantage of the full spectral information of the images, is proposed. The proposed method selects specific bands of the images and adapts the keypoint descriptor and matching stages to benefit from the spectral information, thus increasing the effectiveness of the registration. This work was supported in part by the Consellería de Educación, Universidade e Formación Profesional [grant numbers GRC2014/008, ED431C 2018/19, and ED431G/08], the Ministerio de Economía y Empresa, Government of Spain [grant number TIN2016-76373-P], and the Junta de Castilla y León - ERDF (PROPHET Project) [grant number VA082P17]. All are co-funded by the European Regional Development Fund (ERDF). The work of Álvaro Ordóñez was also supported by the Ministerio de Ciencia, Innovación y Universidades, Government of Spain, under an FPU Grant [grant number FPU16/03537].
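
    As a hedged illustration of the band-selection idea, the sketch below keeps the k bands with the highest entropy and detects SURF keypoints on each of them; the actual selection rule, descriptor adaptation and matching of HSI-SURF are not reproduced here, and SURF itself requires an opencv-contrib build with the non-free modules enabled.

```python
import cv2
import numpy as np

def select_bands(cube, k=3):
    """Keep the k bands with the highest entropy (illustrative criterion only)."""
    scores = []
    for b in range(cube.shape[2]):
        hist, _ = np.histogram(cube[:, :, b], bins=256)
        p = hist[hist > 0] / hist.sum()
        scores.append(-(p * np.log2(p)).sum())
    return np.argsort(scores)[-k:]

def surf_keypoints(cube, k=3):
    """Detect SURF keypoints and descriptors on each selected band."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    result = {}
    for b in select_bands(cube, k):
        band8 = cv2.normalize(cube[:, :, b], None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
        result[int(b)] = surf.detectAndCompute(band8, None)
    return result
```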

    Dual-Window Superpixel Data Augmentation for Hyperspectral Image Classification

    Deep learning (DL) has been shown to obtain superior results for classification tasks in the field of remote sensing hyperspectral imaging. Superpixel-based techniques can be applied to DL, significantly decreasing training and prediction times, but the results are usually far from satisfactory due to overfitting. Data augmentation techniques alleviate the problem by synthetically generating new samples from an existing dataset in order to improve the generalization capabilities of the classification model. In this paper we propose a novel data augmentation framework in the context of superpixel-based DL called dual-window superpixel (DWS). With DWS, data augmentation is performed over patches centered on the superpixels obtained by the application of simple linear iterative clustering (SLIC) superpixel segmentation. DWS is based on dividing the input patches extracted from the superpixels into two regions and independently applying transformations over them. As a result, four different data augmentation techniques are proposed that can be applied to a superpixel-based CNN classification scheme. An extensive comparison in terms of classification accuracy with other data augmentation techniques from the literature using two datasets is also shown. One of the datasets consists of small hyperspectral scenes commonly found in the literature. The other consists of large multispectral vegetation scenes of river basins. The experimental results show that the proposed approach increases the overall classification accuracy for the selected datasets. In particular, two of the data augmentation techniques introduced, namely dual-flip and dual-rotate, obtained the best results. The images of the Galicia dataset were obtained in partnership with the Babcock company, supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia. This work was supported in part by the Ministerio de Ciencia e Innovación, Government of Spain (grant numbers PID2019-104834GB-I00 and BES-2017-080920), and Consellería de Educación, Universidade e Formación Profesional (grant number ED431C 2018/19, and accreditation 2019-2022 ED431G-2019/04). All are co-funded by the European Regional Development Fund (ERDF).
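
    The sketch below shows one possible reading of the dual-flip transform: the pixels of the central superpixel take their values from a flipped copy of the patch, while the outer region is left unchanged (an identity transform), producing a new training sample with the same label. The recombination rule and all names are assumptions for illustration; the DWS implementation in the paper may differ.

```python
import numpy as np

def dual_flip(patch, mask, axis=0):
    """Apply a flip to the inner (superpixel) region of a patch only.

    patch: (H, W, B) window centred on a superpixel.
    mask:  (H, W) boolean array, True for pixels of the central superpixel.
    """
    flipped = np.flip(patch, axis=axis)      # geometric flip of the whole patch
    flipped_mask = np.flip(mask, axis=axis)  # flip the mask consistently
    out = patch.copy()
    # Inner region takes its values from the flipped patch; the outer region
    # keeps the original pixels (another transform could be applied instead).
    out[flipped_mask] = flipped[flipped_mask]
    return out
```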

    Exploring the Registration of Remote Sensing Images using HSI-KAZE in Graphical Units

    Computational and Mathematical Methods in Science and Engineering (CMMSE), Rota, Cádiz, Spain, 30 June - 6 July 2019 (Session I, Part 5). Registration of hyperspectral remote sensing images is a common task in many image processing applications such as land use classification, environmental monitoring, and change detection. The images to be registered present differences as a consequence of being obtained from different points of view, differences in the number of spectral bands captured by the sensors, in illumination and intensity, and also changes in the objects present in the images, among others. Feature-based methods such as HSI-KAZE are more efficient at registering than area-based methods when the images are very rich in geometrical details, as is the case for remote sensing images. They are, nevertheless, computationally more costly because the number of distinctive points to be calculated for these images is high. HSI-KAZE is a method to register hyperspectral remote sensing images based on KAZE features but considering the spectral information. In this work, a robust and efficient implementation of this method on programmable GPUs is presented. This work was supported in part by the Consellería de Educación, Universidade e Formación Profesional [grant numbers GRC2014/008, ED431C 2018/19, and ED431G/08], the Ministerio de Economía y Empresa, Government of Spain [grant number TIN2016-76373-P], and the Junta de Castilla y León - ERDF (PROPHET Project) [grant number VA082P17]. All are co-funded by the European Regional Development Fund (ERDF). The work of Álvaro Ordóñez was also supported by the Ministerio de Ciencia, Innovación y Universidades, Government of Spain, under an FPU Grant [grant numbers FPU16/03537 and EST18/00602].

    Alignment of Hyperspectral Images Using KAZE Features

    Image registration is a common operation in any type of image processing, especially for remote sensing images. Since the publication of the scale-invariant feature transform (SIFT) method, several algorithms based on feature detection have been proposed. In particular, KAZE builds the scale space using a nonlinear diffusion filter instead of Gaussian filters. Nonlinear diffusion filtering allows applying a controlled blur while the important structures of the image are preserved. Hyperspectral images contain a large amount of spatial and spectral information that can be used to perform a more accurate registration. This article presents HSI-KAZE, a method to register hyperspectral remote sensing images based on KAZE but considering the spectral information. The proposed method combines the information of a set of preselected bands, and it adapts the keypoint descriptor and the matching stage to take into account the spectral information. The method is adequate to register images in extreme situations in which the scale between them is very different. The effectiveness of the proposed algorithm has been tested on real images taken on different dates and presenting different types of changes. The experimental results show that the method is robust, achieving image registrations with scales of up to 24.0×. This research was supported in part by the Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia [grant numbers GRC2014/008 and ED431G/08] and the Ministerio de Educación, Cultura y Deporte [grant number TIN2016-76373-P]; both are co-funded by the European Regional Development Fund. The work of Álvaro Ordóñez was supported by the Ministerio de Educación, Cultura y Deporte under an FPU Grant [grant number FPU16/03537]. This work was also partially supported by the Consejería de Educación, Junta de Castilla y León (PROPHET Project) [grant number VA082P17].
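
    HSI-KAZE adapts the descriptor and the matching stage to the spectral dimension; as a single-band baseline, a plain KAZE registration with OpenCV might look like the sketch below (the ratio-test threshold, RANSAC tolerance and 8-bit scaling are illustrative assumptions).

```python
import cv2
import numpy as np

def kaze_register(ref_band, mov_band):
    """Estimate a homography aligning mov_band to ref_band with KAZE features."""
    to8 = lambda im: cv2.normalize(im, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    kaze = cv2.KAZE_create()
    kp_ref, des_ref = kaze.detectAndCompute(to8(ref_band), None)
    kp_mov, des_mov = kaze.detectAndCompute(to8(mov_band), None)
    # Brute-force matching with Lowe's ratio test to discard ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_mov, des_ref, k=2)
            if m.distance < 0.75 * n.distance]
    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC removes the remaining outlier correspondences.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(mov_band, H, ref_band.shape[::-1])
```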

    TCANet for Domain Adaptation of Hyperspectral Images

    The use of Convolutional Neural Networks (CNNs) to solve Domain Adaptation (DA) image classification problems in the context of remote sensing has proven to provide good results, but at a high computational cost. To avoid this problem, a deep learning network for DA in remote sensing hyperspectral images called TCANet is proposed. As a standard CNN, TCANet consists of several stages built from convolutional filters that operate on patches of the hyperspectral image. Unlike a standard CNN, the filter coefficients are obtained through Transfer Component Analysis (TCA). This approach has two advantages: first, TCANet does not require training based on backpropagation, since TCA is itself a learning method that obtains the filter coefficients directly from the input data. Second, DA is performed on the fly, since TCA, in addition to performing dimensionality reduction, obtains components that minimize the difference in the distributions of the data in the different domains corresponding to the source and target images. To build an operating scheme, TCANet includes an initial stage that exploits the spatial information by providing patches around each sample as input data to the network. An output stage performing feature extraction, which introduces sufficient invariance and robustness into the final features, is also included. Since TCA is sensitive to normalization, in order to reduce the difference between the source and target domains, a previous unsupervised domain shift minimization algorithm consisting of applying conditional correlation alignment (CCA) is conditionally applied. The results of a classification scheme based on CCA and TCANet show that the proposed DA technique outperforms other more complex DA techniques. This work was supported in part by the Consellería de Educación, Universidade e Formación Profesional [grant numbers GRC2014/008, ED431C 2018/19, and ED431G/08] and the Ministerio de Economía y Empresa, Government of Spain [grant number TIN2016-76373-P]. All are co-funded by the European Regional Development Fund (ERDF). This work also received financial support from the Xunta de Galicia and the European Union (European Social Fund - ESF).
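
    Because TCA is the component that replaces backpropagation-trained filters, a compact NumPy/SciPy version of standard Transfer Component Analysis is sketched below; it follows the usual formulation (leading eigenvectors of a generalized eigenproblem built from the kernel, MMD and centering matrices) rather than the exact TCANet filter-extraction stage, and the kernel choice, mu and number of components are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.metrics.pairwise import rbf_kernel

def tca_embed(Xs, Xt, n_components=10, mu=1.0, gamma=1.0):
    """Project source (Xs) and target (Xt) samples into a shared subspace
    that reduces the MMD between their distributions."""
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    K = rbf_kernel(np.vstack([Xs, Xt]), gamma=gamma)
    # MMD coefficient matrix L and centering matrix H.
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T
    H = np.eye(n) - np.ones((n, n)) / n
    # Leading solutions of  K H K w = lambda (K L K + mu I) w.
    A = K @ H @ K
    B = K @ L @ K + mu * np.eye(n)
    vals, vecs = eigh(A, B)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    Z = K @ W                         # embedded source + target samples
    return Z[:ns], Z[ns:]

# A TCANet-style stage would then reshape such components into convolutional
# filters applied to patches around each hyperspectral sample.
```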